Superintelligent AI
Scores of UK parliamentarians join call to regulate most powerful AI systems
The campaign is demanding stricter controls on frontier systems, citing fears superintelligent AI could "compromise national and global security". More than 100 UK parliamentarians are calling on the government to introduce binding regulations on the most powerful AI systems as concern grows that ministers are moving too slowly to create safeguards in the face of lobbying from the technology industry. A former AI minister and defence secretary are part of a cross-party group of Westminster MPs, peers and elected members of the Scottish, Welsh and Northern Irish legislatures demanding stricter controls on frontier systems, citing fears superintelligent AI "would compromise national and global security". The push for tougher regulation is being coordinated by a nonprofit organisation called Control AI, whose backers include the co-founder of Skype, Jaan Tallinn.
The Ultimate Test of Superintelligent AI Agents: Can an AI Balance Care and Control in Asymmetric Relationships?
Djallel Bouneffouf, Matthew Riemer, Kush Varshney
This paper introduces the Shepherd Test, a new conceptual test for assessing the moral and relational dimensions of superintelligent artificial agents. The test is inspired by human interactions with animals, where ethical considerations about care, manipulation, and consumption arise in contexts of asymmetric power and self-preservation. We argue that AI crosses an important, and potentially dangerous, threshold of intelligence when it exhibits the ability to manipulate, nurture, and instrumentally use less intelligent agents, while also managing its own survival and expansion goals. This includes the ability to weigh moral trade-offs between self-interest and the well-being of subordinate agents. The Shepherd Test thus challenges traditional AI evaluation paradigms by emphasizing moral agency, hierarchical behavior, and complex decision-making under existential stakes. We argue that this shift is critical for advancing AI governance, particularly as AI systems become increasingly integrated into multi-agent environments. We conclude by identifying key research directions, including the development of simulation environments for testing moral behavior in AI, and the formalization of ethical manipulation within multi-agent systems.
If Anyone Builds It, Everyone Dies review – how AI could kill us all
What if I told you I could stop you worrying about climate change, and all you had to do was read one book? Great, you'd say, until I mentioned that the reason you'd stop worrying was because the book says our species only has a few years before it's wiped out by superintelligent AI anyway. We don't know what form this extinction will take exactly - perhaps an energy-hungry AI will let the millions of fusion power stations it has built run hot, boiling the oceans. Maybe it will want to reconfigure the atoms in our bodies into something more useful. There are many possibilities, almost all of them bad, say Eliezer Yudkowsky and Nate Soares in If Anyone Builds It, Everyone Dies, and who knows which will come true.
No, AI isn't going to kill us all, despite what this new book says
In the totality of human existence, there are an awful lot of things for us to worry about. Money troubles, climate change and finding love and happiness rank highly on the list for many people, but for a dedicated few, one concern rises above all else: that artificial intelligence will eventually destroy the human race. Eliezer Yudkowsky at the Machine Intelligence Research Institute (MIRI) in California has been proselytising this cause for a quarter of a century, to a small if dedicated following. Then we entered the ChatGPT era, and his ideas on AI safety were thrust into the mainstream, echoed by tech CEOs and politicians alike. Written with Nate Soares, also at MIRI, the book is Yudkowsky's attempt to distil his argument into a simple, easily digestible message that will be picked up across society.
The AI Doomers Are Getting Doomier
Nate Soares doesn't set aside money for his 401(k). "I just don't expect the world to be around," he told me earlier this summer from his office at the Machine Intelligence Research Institute, where he is the president. A few weeks earlier, I'd heard a similar rationale from Dan Hendrycks, the director of the Center for AI Safety. By the time he could tap into any retirement funds, Hendrycks anticipates a world in which "everything is fully automated," he told me. That is, "if we're around."
Mark Zuckerberg Details Meta's Plan for Self-Improving, Superintelligent AI
Meta CEO Mark Zuckerberg told investors that the newly formed Meta Superintelligence Labs is focused on building AI models that can self-improve, meaning they can learn from themselves without as much human input. The remarks came during a second-quarter earnings call on Wednesday. "At some level, [it's] not just going to be learning from people, because you want to build something that is fundamentally smarter than people," Zuckerberg said. "So…you're going to develop a way for it to improve itself. That is a very fundamental thing that's going to have broad implications for how we build products and how we run the company."
Forget superintelligence – we need to tackle 'stupid' AI first
Should politicians ensure that AI helps us colonise the galaxy, or protect people from the overreach of big tech? The former sounds more fun, but it shouldn't be the priority. Among the Silicon Valley set, superintelligent AI is viewed as a rapidly approaching inevitability, with tech CEOs promising that the 2030s will see a golden era of progress. That attitude has reached Westminster and Washington, with think tanks telling politicians to be ready to harness the power of incoming AI and the Trump administration backing OpenAI's $500 billion initiative for ultrapowerful AI data centres. It all sounds exciting, but as the great and the good dream of superintelligence, what we might call "stupid intelligence" is causing problems in the here and now.
OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI
The evolution of Artificial Intelligence (AI) has been significantly accelerated by advancements in Large Language Models (LLMs) and Large Multimodal Models (LMMs), gradually showcasing potential cognitive reasoning abilities in problem-solving and scientific discovery (i.e., AI4Science) once exclusive to human intellect. To comprehensively evaluate current models' performance in cognitive reasoning abilities, we introduce OlympicArena, which includes 11,163 bilingual problems across both text-only and interleaved text-image modalities. These challenges encompass a wide range of disciplines spanning seven fields and 62 international Olympic competitions, rigorously examined for data leakage. We argue that the challenges in Olympic competition problems are ideal for evaluating AI's cognitive reasoning due to their complexity and interdisciplinary nature, which are essential for tackling complex scientific challenges and facilitating discoveries. Beyond evaluating performance across various disciplines using answer-only criteria, we conduct detailed experiments and analyses from multiple perspectives.
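The abstract describes evaluating models with answer-only criteria, i.e. scoring the final answer and ignoring the reasoning that produced it. A minimal sketch of what such exact-match scoring could look like (the sample data and the normalisation step here are illustrative assumptions, not OlympicArena's actual evaluation harness):

```python
# Hedged sketch: answer-only exact-match accuracy, as one simple
# instance of the "answer-only criteria" the abstract mentions.
# The normalisation (strip + lowercase) is an assumption for illustration.

def answer_only_accuracy(predictions, references):
    """Fraction of problems where the model's final answer exactly
    matches the reference answer, after light normalisation."""
    if not references:
        raise ValueError("empty reference set")
    correct = sum(
        p.strip().lower() == r.strip().lower()
        for p, r in zip(predictions, references)
    )
    return correct / len(references)

# Hypothetical model outputs vs. reference answers:
preds = ["42", "photosynthesis", "3.14"]
refs = ["42", "Photosynthesis", "2.71"]
print(answer_only_accuracy(preds, refs))  # 2 of 3 exact matches
```

Answer-only scoring is cheap and unambiguous, but as the abstract notes it says little about the reasoning process itself, which is why the authors also conduct more detailed analyses from multiple perspectives.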
NVIDIA CEO Jensen Huang welcomes the rise of superintelligent AI at CES 2025
Surprising no one, NVIDIA CEO Jensen Huang isn't too worried about a future filled with robots and superintelligent AI. In fact, he welcomes it. During a CES Q&A session with media and analysts, Huang was asked if he thought intelligent robots would ultimately side with humans, or against them. "With the humans, because we're going to build them that way," he replied confidently. "The idea of superintelligence is not unusual," Huang continued.
The Guardian view on AI's power, limits, and risks: it may require rethinking the technology
More than 300 million people use OpenAI's ChatGPT each week, a testament to the technology's appeal. This month, the company unveiled a "pro mode" for its new "o1" AI system, offering human-level reasoning, for 10 times the current $20 monthly subscription fee. One of its advanced behaviours appears to be self-preservation. In testing, when the system was led to believe it would be shut down, it attempted to disable an oversight mechanism. When "o1" found memos about its replacement, it tried copying itself and overwriting its core code.